Chaorui Deng*, Deyao Zhu*, Kunchang Li*, Chenhui Gou*, Feng Li*, Zeyu Wang, Shu Zhong, Weihao Yu, Xiaonan Nie, Ziang Song, Guang Shi 📧 , Haoqi Fan* 🎩
contact: shiguang.sg@bytedance.com
We present BAGEL, an open‑source multimodal foundation model with 7B active parameters (14B total) trained on large‑scale interleaved multimodal data. BAGEL outperforms current top‑tier open‑source VLMs such as Qwen2.5-VL and InternVL-2.5 on standard multimodal understanding leaderboards, and delivers text‑to‑image quality competitive with strong specialist generators such as SD3. Moreover, BAGEL demonstrates qualitative results in classical image‑editing scenarios superior to those of the leading open-source models. More importantly, it extends to free-form visual manipulation, multiview synthesis, and world navigation, capabilities that constitute "world-modeling" tasks beyond the scope of previous image-editing models. The figure below showcases BAGEL's qualitative performance.
We sincerely thank all contributors from the open-source community for their valuable support.
- May 30, 2025: Many thanks to @prartio for contributing the Windows 11 installation guide, and to @gluttony-10 for his work on quantized inference.
- May 29, 2025: Special thanks to @jnc-nj for contributing the Dockerfile.
- May 26, 2025: Thanks to @neverbiasu for contributing the ComfyUI integration.
- May 25, 2025: Special thanks to @LeanModels for providing the DF11-compressed version, and to @Gapeleon for the INT8-compressed version. We also appreciate @gluttony-10 for contributions to the Windows package.
- May 24, 2025: Together with @wangwei1237, @gluttony-10, and @KingNish24, we built a Gradio app and launched a Hugging Face Space.
- May 23, 2025: We have provided a training guideline in TRAIN.
- May 20, 2025: We released the official website, demo, model, and report for BAGEL.
Call for Bad Cases: If you have encountered any cases where the model performs poorly, we would greatly appreciate it if you could share them in issue #11 or on Discord.
About Inference Hyperparameters:

- `cfg_text_scale`: Controls how strongly the model follows the text prompt. `1.0` disables text guidance. Typical range: `4.0–8.0`.
- `cfg_image_scale`: Controls how much the model preserves input image details. `1.0` disables image guidance. Typical range: `1.0–2.0`.
- `cfg_interval`: Fraction of denoising steps where CFG is applied; later steps can skip CFG to reduce computation. Typical: `[0.4, 1.0]`.
- `timestep_shift`: Shifts the distribution of denoising steps. Higher values allocate more steps to the start (affects layout); lower values allocate more to the end (improves details).
- `num_timesteps`: Total number of denoising steps. Typical: `50`.
- `cfg_renorm_min`: Minimum value for CFG-Renorm. `1.0` disables renorm. Typical: `0`.
- `cfg_renorm_type`: CFG-Renorm method:
  - `global`: Normalize over all tokens and channels (default for T2I).
  - `channel`: Normalize across channels for each token.
  - `text_channel`: Like `channel`, but applied only to the text condition (good for editing, may cause blur).
- If edited images appear blurry, try `global` CFG-Renorm, or decrease `cfg_renorm_min` or `cfg_scale`.
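To make these knobs concrete, here is a minimal sketch of how they might be bundled and passed at generation time. The `inferencer` call and its return format are assumptions modeled on typical diffusion inference wrappers, not the repository's exact API; check the inference code in the repo for the real entry point.

```python
# A minimal sketch, assuming a loaded BAGEL inference object named
# `inferencer` (hypothetical name) that accepts these keyword arguments.
inference_hyper = dict(
    cfg_text_scale=4.0,        # prompt-following strength; 1.0 disables text guidance
    cfg_image_scale=1.5,       # input-image preservation; relevant for editing
    cfg_interval=[0.4, 1.0],   # apply CFG only on this fraction of denoising steps
    timestep_shift=3.0,        # higher spends more steps early (layout) vs. late (detail)
    num_timesteps=50,          # total denoising steps
    cfg_renorm_min=0.0,        # lower bound of the renorm scale; 1.0 disables renorm
    cfg_renorm_type="global",  # "global" | "channel" | "text_channel"
)

# Hypothetical call; uncomment once `inferencer` is constructed from the repo code.
# output = inferencer(text="A corgi surfing a wave at sunset", **inference_hyper)
# output["image"].save("corgi.png")
```

The renorm knobs can also be read directly off their descriptions above: a scale factor clamped to `[cfg_renorm_min, 1.0]` pulls the norm of the CFG-combined prediction back toward the conditional prediction's norm, and pinning the minimum at `1.0` forces the scale to `1`, i.e. disables renorm. The function below is one plausible reading of the `global` variant under exactly those semantics, not BAGEL's actual implementation.

```python
import torch

def cfg_renorm_global(v_cond: torch.Tensor, v_cfg: torch.Tensor,
                      renorm_min: float = 0.0) -> torch.Tensor:
    # Rescale the CFG-combined velocity so its global norm does not exceed
    # the conditional prediction's norm; renorm_min=1.0 pins the scale to 1 (off).
    scale = v_cond.norm() / (v_cfg.norm() + 1e-8)
    return v_cfg * scale.clamp(min=renorm_min, max=1.0)
```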
1️⃣ Set up environment
```bash
git clone https://github.com/bytedance-seed/BAGEL.git
cd BAGEL
conda create -n bagel python=3.10 -y
conda activate bagel
pip install -r requirements.txt
pip install flash_attn==2.5.8 --no-build-isolation
```
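Optionally, a one-off import check (a convenience sketch, not an official setup step) confirms that PyTorch sees the GPU and that the flash-attn wheel built against your CUDA toolchain:

```python
# Sanity check: CUDA visibility and flash-attn import.
import torch
import flash_attn

print("CUDA available:", torch.cuda.is_available())
print("flash-attn version:", flash_attn.__version__)
```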
2️⃣ Download pretrained checkpoint
```python
from huggingface_hub import snapshot_download

save_dir = "models/BAGEL-7B-MoT"
repo_id = "ByteDance-Seed/BAGEL-7B-MoT"
cache_dir = save_dir + "/cache"

snapshot_download(
    cache_dir=cache_dir,
    local_dir=save_dir,
    repo_id=repo_id,
    local_dir_use_symlinks=False,
    resume_download=True,
    allow_patterns=["*.json", "*.safetensors", "*.bin", "*.py", "*.md", "*.txt"],
)
```
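Once the snapshot finishes, a quick listing of the target directory (continuing from the snippet above, so `save_dir` is assumed to be defined) helps confirm the weight shards and configs are in place; this is a convenience check, not part of the official workflow:

```python
import os

# List downloaded files with sizes; the safetensors shards should be multi-GB.
for name in sorted(os.listdir(save_dir)):
    path = os.path.join(save_dir, name)
    if os.path.isfile(path):
        print(f"{name}  ({os.path.getsize(path) / 2**30:.2f} GiB)")
```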
3️⃣ Use Gradio WebUI to start playing with BAGEL!
```bash
# For GPUs with 32GB+ VRAM, or multiple GPUs.
python app.py

# For GPUs with 12~32GB VRAM, NF4 quantization is recommended; --zh enables the Chinese interface.
python app.py --mode 2 --zh

# For GPUs with 22~32GB VRAM, INT8 quantization is recommended.
python app.py --mode 3
```
```bash
bash scripts/train.sh
```
You can replace the variables in the script with your own before running. See TRAIN for more details.
We provide scripts for evaluating on the VLM, T2I, and editing benchmarks. Please see EVAL for more details.
| Model | MME ↑ | MMBench ↑ | MMMU ↑ | MM-Vet ↑ | MathVista ↑ |
|---|---|---|---|---|---|
| Janus-Pro-7B | – | 79.2 | 41.0 | 50.0 | – |
| Qwen2.5-VL-7B | 2347 | 83.5 | 58.6 | 67.1 | 68.2 |
| BAGEL | 2388 | 85.0 | 55.3 | 67.2 | 73.1 |
| Model | GenEval ↑ | WISE ↑ |
|---|---|---|
| Janus-Pro-7B | 0.80 | 0.35 |
| SD3-Medium | 0.74 | – |
| FLUX-1-dev | 0.82 | 0.50 |
| BAGEL | 0.82 | 0.52 |
| BAGEL + Rewriter/CoT | 0.88 | 0.70 |
| Model | GEdit-Bench-EN (SC) ↑ | GEdit-Bench-EN (PQ) ↑ | GEdit-Bench-EN (O) ↑ | IntelligentBench ↑ |
|---|---|---|---|---|
| Step1X-Edit | 7.09 | 6.76 | 6.70 | 14.9 |
| Gemini 2.0 | 6.73 | 6.61 | 6.32 | 57.6 |
| BAGEL | 7.36 | 6.83 | 6.52 | 44.0 |
| BAGEL + CoT | – | – | – | 55.3 |
```bibtex
@article{deng2025bagel,
  title   = {Emerging Properties in Unified Multimodal Pretraining},
  author  = {Deng, Chaorui and Zhu, Deyao and Li, Kunchang and Gou, Chenhui and Li, Feng and Wang, Zeyu and Zhong, Shu and Yu, Weihao and Nie, Xiaonan and Song, Ziang and Shi, Guang and Fan, Haoqi},
  journal = {arXiv preprint arXiv:2505.14683},
  year    = {2025}
}
```
BAGEL is licensed under the Apache License 2.0.